
    Feasibility analysis of design for remanufacturing in bearing using hybrid fuzzy-topsis and taguchi optimization

    The tremendous advancement in technology, productivity, and standard of living has come at the cost of environmental deterioration and increased energy and raw-material consumption. In this regard, remanufacturing is a viable option to reduce energy usage, carbon footprint, and raw-material usage. In this manuscript, we use computational intelligence techniques to determine the feasibility of remanufacturing roller bearings. We collected used N308 bearings from 5 different Indian cities. Using Fuzzy-TOPSIS, we found that roundness, surface roughness, and weight play a vital role in design for remanufacturing of roller bearings, while change in diameter, change in thickness, and change in width showed minimal influence. We also used Taguchi analysis to reassess the problem. The roundness of the inner and outer races was found to be the most influential parameter in deciding whether a bearing is selected for remanufacturing. The results suggest that the bearing designer should design the bearing so that the roundness of both races is taken care of during manufacturing. However, using Taguchi analysis the weight of the rollers was found to have the least influence. Overall, the predictions of the Taguchi analysis were similar to those of the Fuzzy-TOPSIS analysis.
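
    Editorial note: the abstract does not include the authors' computations. As a rough illustration of how a TOPSIS-style ranking of used bearings against remanufacturing criteria could look, here is a minimal sketch in Python using the crisp (non-fuzzy) variant of the method; the criterion names, weights, and scores are hypothetical and not taken from the paper.

    import numpy as np

    # Hypothetical decision matrix: rows = used-bearing samples, columns = criteria
    # (all illustrative): roundness error, surface roughness, weight loss,
    # change in diameter, change in thickness, change in width.
    scores = np.array([
        [0.8, 0.7, 0.9, 0.3, 0.2, 0.1],
        [0.6, 0.9, 0.7, 0.4, 0.3, 0.2],
        [0.9, 0.6, 0.8, 0.2, 0.1, 0.3],
    ])
    weights = np.array([0.30, 0.25, 0.20, 0.10, 0.08, 0.07])   # assumed weights
    cost_criteria = np.array([True] * 6)                        # lower is better for all

    # 1. Vector-normalize each column, then apply the weights.
    norm = scores / np.linalg.norm(scores, axis=0)
    weighted = norm * weights

    # 2. Ideal best/worst depend on whether a criterion is a cost or a benefit.
    ideal_best = np.where(cost_criteria, weighted.min(axis=0), weighted.max(axis=0))
    ideal_worst = np.where(cost_criteria, weighted.max(axis=0), weighted.min(axis=0))

    # 3. Closeness coefficient: higher means closer to the ideal candidate.
    d_best = np.linalg.norm(weighted - ideal_best, axis=1)
    d_worst = np.linalg.norm(weighted - ideal_worst, axis=1)
    closeness = d_worst / (d_best + d_worst)

    print("Ranking of bearing samples (best first):", np.argsort(-closeness))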

    Criteria for the experimental observation of multi-dimensional optical solitons in saturable media

    Criteria for experimental observation of multi-dimensional optical solitons in media with saturable refractive nonlinearities are developed. The criteria are applied to actual material parameters (characterizing the cubic self-focusing and quintic self-defocusing nonlinearities, two-photon loss, and optical-damage threshold) for various glasses. This way, we identify operation windows for soliton formation in these glasses. It is found that two-photon absorption sets stringent limits on the windows. We conclude that, while a well-defined window of parameters exists for two-dimensional solitons (spatial or spatiotemporal), for their three-dimensional spatiotemporal counterparts such a window \emph{does not} exist, due to the nonlinear loss in glasses. Comment: 8 pages, to appear in Phys. Rev.

    Holistic Slowdown Driven Scheduling and Resource Management for Malleable Jobs

    In job scheduling, the concept of malleability has been explored for many years. Research shows that malleability improves system performance, but its use in HPC has never become widespread. The causes are the difficulty of developing malleable applications and the lack of support for, and integration across, the different layers of the HPC software stack. In recent years, however, malleability in job scheduling has become more critical because of the increasing complexity of hardware and workloads. In this context, using nodes in exclusive mode is not always the most efficient solution: traditional HPC applications were highly tuned for static allocations, but they offer zero flexibility for dynamic execution. This paper proposes a new holistic, dynamic job scheduling policy, Slowdown Driven (SD-Policy), which exploits the malleability of applications as the key technology to reduce the average slowdown and response time of jobs. SD-Policy is based on backfilling and node sharing. It applies malleability to running jobs to make room for jobs that will run with a reduced set of resources, but only when the estimated slowdown improves over the static approach. We implemented SD-Policy in SLURM and evaluated it in a real production environment, and with a simulator using workloads of up to 198K jobs. Results show better resource utilization, with reductions in makespan, response time, slowdown, and energy consumption of up to 7%, 50%, 70%, and 6%, respectively, for the evaluated workloads.
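
    Editorial note: as a rough sketch of the slowdown-driven idea described above (assumed details, not the authors' SLURM implementation), slowdown is commonly defined as (wait time + run time) / run time, and a malleable running job would be shrunk only if starting the waiting job now on fewer resources yields a lower estimated slowdown than keeping the static allocation.

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        wait_time: float        # time already spent queued (s)
        runtime_full: float     # estimated runtime on the requested nodes (s)
        runtime_reduced: float  # estimated runtime on a reduced allocation (s)

    def slowdown(wait: float, run: float) -> float:
        """Slowdown metric: (wait + run) / run."""
        return (wait + run) / run

    def should_shrink(waiting: Job, extra_wait_if_static: float) -> bool:
        """Shrink a malleable running job to make room for `waiting` only if
        starting it now on fewer nodes gives a lower estimated slowdown than
        waiting `extra_wait_if_static` more seconds for the full allocation.
        Illustrative policy; SD-Policy as described also uses backfilling
        and node sharing."""
        sd_dynamic = slowdown(waiting.wait_time, waiting.runtime_reduced)
        sd_static = slowdown(waiting.wait_time + extra_wait_if_static,
                             waiting.runtime_full)
        return sd_dynamic < sd_static

    # Hypothetical waiting job: shrinking pays off despite the longer runtime.
    job = Job("sim_run", wait_time=600.0, runtime_full=3600.0, runtime_reduced=5400.0)
    print(should_shrink(job, extra_wait_if_static=7200.0))  # True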

    Discovering Valuable Items from Massive Data

    Suppose there is a large collection of items, each with an associated cost and an inherent utility that is revealed only once we commit to selecting it. Given a budget on the cumulative cost of the selected items, how can we pick a subset of maximal value? This task generalizes several important problems such as multi-arm bandits, active search, and the knapsack problem. We present an algorithm, GP-Select, which utilizes prior knowledge about similarity between items, expressed as a kernel function. GP-Select uses Gaussian process prediction to balance exploration (estimating the unknown value of items) and exploitation (selecting items of high value). We extend GP-Select to be able to discover sets that simultaneously have high utility and are diverse. Our preference for diversity can be specified as an arbitrary monotone submodular function that quantifies the diminishing returns obtained when selecting similar items. Furthermore, we exploit the structure of the model updates to achieve an order-of-magnitude (up to 40X) speedup in our experiments without resorting to approximations. We provide strong guarantees on the performance of GP-Select and apply it to three real-world case studies of industrial relevance: (1) refreshing a repository of prices in a Global Distribution System for the travel industry, (2) identifying diverse, binding-affine peptides in a vaccine design task, and (3) maximizing clicks in a web-scale recommender system by recommending items to users.
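
    Editorial note: a minimal sketch of the exploration/exploitation selection loop described above, assuming a simple GP-UCB-style rule on top of scikit-learn's GaussianProcessRegressor. The features, kernel, budget, and beta parameter are illustrative; this is not the paper's exact GP-Select procedure (which in addition handles diversity via a submodular term and fast model updates).

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    # Hypothetical items: feature vectors, per-item costs, and hidden utilities.
    X = rng.normal(size=(200, 5))
    costs = rng.uniform(0.5, 2.0, size=200)
    true_utility = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

    budget, beta = 20.0, 2.0
    selected, spent = [], 0.0
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)

    while True:
        if selected:
            gp.fit(X[selected], true_utility[selected])    # utilities revealed so far
            mu, sigma = gp.predict(X, return_std=True)
        else:
            mu, sigma = np.zeros(len(X)), np.ones(len(X))  # uninformed prior

        ucb = mu + beta * sigma                            # optimism under uncertainty
        ucb[selected] = -np.inf                            # never re-select an item
        ucb[costs + spent > budget] = -np.inf              # respect the budget

        best = int(np.argmax(ucb))
        if not np.isfinite(ucb[best]):
            break                                          # nothing feasible remains
        selected.append(best)
        spent += costs[best]

    print(f"selected {len(selected)} items, utility {true_utility[selected].sum():.2f}, cost {spent:.2f}")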

    Predicting application performance using supervised learning on communication features

    Abstract not provided.

    High Temperature Ferromagnetism with Giant Magnetic Moment in Transparent Co-doped SnO2-d

    Occurrence of room-temperature ferromagnetism is demonstrated in pulsed-laser-deposited thin films of Sn1-xCoxO2-d (x < 0.3). Interestingly, films of Sn0.95Co0.05O2-d grown on R-plane sapphire not only exhibit ferromagnetism with a Curie temperature close to 650 K, but also a giant magnetic moment of about 7 Bohr magnetons per Co, not yet reported in any diluted magnetic semiconductor system. The films are semiconducting and optically highly transparent. Comment: 12 pages, 4 figures.

    A Comparative Analysis of Load Balancing Algorithms Applied to a Weather Forecast Model

    Among the many reasons for load imbalance in weather forecasting models, the dynamic imbalance caused by localized variations in the state of the atmosphere is the hardest one to handle. As an example, active thunderstorms may substantially increase load at a certain timestep with respect to previous timesteps in an unpredictable manner; after all, tracking storms is one of the reasons for running a weather forecasting model. In this paper, we present a comparative analysis of different load balancing algorithms to deal with this kind of load imbalance. We analyze the impact of these strategies on computation and communication, and the effect that the frequency at which the load balancer is invoked has on execution time. This is done without any code modification, employing the concept of processor virtualization, which basically means that the domain is over-decomposed and the unit of rebalance is a sub-domain. With this approach, we were able to reduce the execution time of a full, real-world weather model.
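
    Editorial note: as a toy illustration of the over-decomposition idea, and not any of the specific strategies compared in the paper, a greedy rebalance assigns the heaviest sub-domains first, each to the currently least loaded processor.

    import heapq

    def greedy_rebalance(subdomain_loads, num_procs):
        """Assign over-decomposed sub-domains to processors, heaviest first,
        always onto the least loaded processor (longest-processing-time heuristic).
        Illustrative only; real load balancers also weigh migration and
        communication costs."""
        heap = [(0.0, p) for p in range(num_procs)]   # (current load, processor id)
        heapq.heapify(heap)
        assignment = {p: [] for p in range(num_procs)}
        for name, load in sorted(subdomain_loads.items(), key=lambda kv: -kv[1]):
            proc_load, proc = heapq.heappop(heap)
            assignment[proc].append(name)
            heapq.heappush(heap, (proc_load + load, proc))
        return assignment

    # Hypothetical per-sub-domain loads measured at the previous timestep,
    # where a thunderstorm makes a few sub-domains much more expensive.
    loads = {"sd00": 1.0, "sd01": 1.1, "sd02": 4.5, "sd03": 4.2,
             "sd04": 0.9, "sd05": 1.0, "sd06": 1.2, "sd07": 0.8}
    print(greedy_rebalance(loads, num_procs=4))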

    Independence in CLP Languages

    Studying independence of goals has proven very useful in the context of logic programming. In particular, it has provided a formal basis for powerful automatic parallelization tools, since independence ensures that two goals may be evaluated in parallel while preserving correctness and efficiency. We extend the concept of independence to constraint logic programs (CLP) and prove that it also ensures the correctness and efficiency of the parallel evaluation of independent goals. Independence for CLP languages is more complex than for logic programming, as search space preservation is necessary but no longer sufficient for ensuring correctness and efficiency. Two additional issues arise. The first is that the cost of constraint solving may depend upon the order in which constraints are encountered. The second is the need to handle dynamic scheduling. We clarify these issues by proposing various types of search independence and constraint solver independence, and show how they can be combined to allow different optimizations, from parallelism to intelligent backtracking. Sufficient conditions for independence which can be evaluated "a priori" at run-time are also proposed. Our study also yields new insights into independence in logic programming languages. In particular, we show that search space preservation is not only a sufficient but also a necessary condition for ensuring correctness and efficiency of parallel execution.